Data sets and data quality in software engineering
OBJECTIVE - to assess the extent and types of techniques used to manage quality within software engineering data sets. We consider this a particularly interesting question in the context of initiatives to promote sharing and secondary analysis of data sets.
METHOD - we perform a systematic review of available empirical software engineering studies.
RESULTS - only 23 out of the many hundreds of studies assessed explicitly considered data quality.
CONCLUSIONS - first, the community needs to consider the quality and appropriateness of the data set being utilised; not all data sets are equal. Second, we need more research into means of identifying, and ideally repairing, noisy cases. Third, it should become routine to use sensitivity analysis to assess conclusion stability with respect to the assumptions that must be made concerning noise levels.
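The sensitivity-analysis recommendation can be illustrated with a minimal sketch: inject noise into the data at increasing rates and check whether the conclusion (here, which of two toy predictors is more accurate) survives. The noise model, the MMRE measure, and both predictors are our own illustrative assumptions, not taken from the review.

```python
import random
import statistics

def mmre(actual, predicted):
    """Mean magnitude of relative error, a common accuracy measure."""
    return statistics.mean(abs(a - p) / a for a, p in zip(actual, predicted))

def add_noise(values, rate, rng):
    """Perturb a fraction `rate` of cases by up to +/-50% (assumed noise model)."""
    noisy = list(values)
    for i in range(len(noisy)):
        if rng.random() < rate:
            noisy[i] *= 1 + rng.uniform(-0.5, 0.5)
    return noisy

rng = random.Random(42)
effort = [rng.uniform(10, 100) for _ in range(200)]  # synthetic effort data

# Two toy "models": a global-mean predictor and a systematically biased one.
for rate in (0.0, 0.1, 0.2, 0.3):
    noisy = add_noise(effort, rate, rng)
    mean_pred = [statistics.mean(noisy)] * len(noisy)
    biased_pred = [e * 1.3 for e in noisy]  # always 30% high
    print(f"noise={rate:.0%}  mean-model MMRE={mmre(effort, mean_pred):.2f}  "
          f"biased-model MMRE={mmre(effort, biased_pred):.2f}")
```

If the ranking of the two models flips at plausible noise levels, the conclusion is not stable; if it holds across the sweep, the conclusion is robust to that assumed noise model.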
Data sets and data quality in software engineering: Eight years on
Context: We revisit our review of data quality within the context of empirical software engineering eight years on from our PROMISE 2008 article. Objective: To assess the extent and types of techniques used to manage quality within data sets. We consider this a particularly interesting question in the context of initiatives to promote sharing and secondary analysis of data sets. Method: We update the 2008 mapping study through four subsequently published reviews and a snowballing exercise. Results: The original study located only 23 articles explicitly considering data quality. This picture has changed substantially, as our updated review now finds 283 articles; however, we estimate that this still represents perhaps 1% of the total empirical software engineering literature. Conclusions: It appears the community is now taking the issue of data quality more seriously, and there is more work exploring techniques to automatically detect (and sometimes repair) noise problems. However, there is still little systematic work to evaluate the various data sets that are widely used for secondary analysis; addressing this would be of considerable benefit. It should also be a priority to work collaboratively with practitioners to add new, higher quality data to the existing corpora.
Susceptibility of optimal train schedules to stochastic disturbances of process times
This work focuses on the stochastic evaluation of train schedules computed by a microscopic scheduler of railway operations based on deterministic information. The research question is to assess the degree of sensitivity of various rescheduling algorithms to variations in process times (running and dwell times). In fact, the objective of railway traffic management is to reduce delay propagation and to increase the disturbance robustness of train schedules at a network scale. We present a quantitative study of traffic disturbances and their effects on the schedules computed by simple and advanced rescheduling algorithms. Computational results are based on a complex and densely occupied Dutch railway area; train delays are computed based on accepted statistical distributions, and dwell and running times of trains are subject to additional stochastic variations. From the results obtained on a real case study, an advanced branch and bound algorithm, on average, outperforms a First In First Out scheduling rule in both deterministic and stochastic traffic scenarios. However, the characteristics of the stochastic processes and the way a stochastic instance is handled turn out to have a serious impact on scheduler performance.
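The First In First Out baseline mentioned above can be sketched in miniature: trains claim one shared track section in arrival order, subject to a minimum headway, and disturbed arrival times propagate knock-on delay. The headway, timetable spacing, and disturbance distribution are our illustrative assumptions, not the paper's Dutch case-study data.

```python
import random

HEADWAY = 120  # minimum separation on the shared section, in seconds (assumed)

def fifo_dispatch(arrivals):
    """Dispatch trains through one shared section in arrival order (FIFO),
    enforcing a minimum headway; return each train's knock-on delay."""
    delays, free_at = [], 0.0
    for planned in sorted(arrivals):
        start = max(planned, free_at)   # wait until the section is free
        delays.append(start - planned)
        free_at = start + HEADWAY
    return delays

rng = random.Random(1)
planned = [i * 150 for i in range(10)]  # timetable with 30 s of slack per train
# Stochastic disturbance of arrival times (assumed truncated-Gaussian model)
disturbed = [p + max(0.0, rng.gauss(0, 90)) for p in planned]
delays = fifo_dispatch(disturbed)
print(f"total knock-on delay: {sum(delays):.0f} s over {len(delays)} trains")
```

Rerunning this over many sampled disturbances is the essence of the stochastic evaluation: the same dispatching rule can look adequate on deterministic input and fragile once process times vary.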
Algorithm Engineering in Robust Optimization
Robust optimization is a young and emerging field of research that has received considerable interest over the last decade. In this paper, we argue that the algorithm engineering methodology fits the field of robust optimization very well and yields a rewarding new perspective on both the current state of research and open research directions.

To this end, we go through the algorithm engineering cycle of design and analysis of concepts, development and implementation of algorithms, and theoretical and experimental evaluation. We show that many ideas of algorithm engineering have already been applied in publications on robust optimization. Most work on robust optimization is devoted to the analysis of concepts and the development of algorithms, some papers deal with the evaluation of a particular concept in case studies, and work comparing concepts is only beginning. A remaining drawback in many papers on robustness is the missing feedback loop from experimental results back into the design.
Planning rapid transit networks
Rapid transit construction projects are major endeavours that require long-term planning by several players, including politicians, urban planners, engineers, management consultants, and citizen groups. Traditionally, operations research methods have not played a major role at the planning level, but several tools developed in recent years can assist the decision process and help produce tentative network designs that can be submitted to the planners for further evaluation. This article reviews some indices for the quality of a rapid transit network, as well as mathematical models and heuristics that can be used to design networks. © 2011 Elsevier Ltd.

This research was partly funded by the Canadian Natural Sciences and Engineering Research Council under grant no. 39682-10, the Spanish Ministry of Science and Innovation under grant no. MTM 2009-14243, and the Junta de Andalucía, Spain, under grant no. P09-TEP-5022. This support is gratefully acknowledged. Fig. 10 was kindly provided by Giuseppe Bruno. Thanks are due to a referee who provided several valuable comments on an earlier version of this paper.

Laporte, G.; Mesa, J.; Ortega, F.; Perea Rojas Marcos, F. (2011). Planning rapid transit networks. Socio-Economic Planning Sciences. 45(3):95-104. https://doi.org/10.1016/j.seps.2011.02.001
Towards model-informed precision dosing of piperacillin: multicenter systematic external evaluation of pharmacokinetic models in critically ill adults with a focus on Bayesian forecasting
Purpose: Inadequate piperacillin (PIP) exposure in intensive care unit (ICU) patients threatens therapeutic success. Model-informed precision dosing (MIPD) might be promising to individualize dosing; however, the transferability of published models to external populations is uncertain. This study aimed to externally evaluate the available PIP population pharmacokinetic (PopPK) models.
Methods: A multicenter dataset of 561 ICU patients (11 centers/3654 concentrations) was used for the evaluation of 24 identified models. Model performance was investigated for a priori (A) predictions, i.e., considering dosing records and patient characteristics only, and for Bayesian forecasting, i.e., additionally including the first (B1) or first and second (B2) therapeutic drug monitoring (TDM) samples per patient. Median relative prediction error (MPE) [%] and median absolute relative prediction error (MAPE) [%] were calculated to quantify accuracy and precision.
Results: The evaluation revealed large inter-model variability (A: MPE −135.6% to 78.3%, MAPE 35.7% to 135.6%). Integration of TDM data improved all model predictions (relative improvement of B1/B2 vs. A: median |MPE| across all models 45.1%/67.5%; median MAPE across all models 29%/39%). The model by Kim et al. was identified as most appropriate for the total dataset (A/B1/B2: MPE −9.8%/−5.9%/−0.9%; MAPE 37%/27.3%/23.7%), the model by Udy et al. performed best in patients receiving intermittent infusion, and the model by Klastrup et al. best predicted patients receiving continuous infusion. Additional evaluations stratified by sex and renal replacement therapy revealed further promising models.
Conclusion: The predictive performance of published PIP models in ICU patients varied considerably, highlighting the relevance of appropriate model selection for MIPD. Our differentiated external evaluation identified specific models suitable for clinical use, especially in combination with TDM.
Keywords: Intensive care medicine; Model-informed precision dosing; Pharmacokinetics/pharmacodynamics; Piperacillin; Therapeutic drug monitoring
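The accuracy and precision metrics named in the Methods can be sketched directly. We assume the common definition of the relative prediction error, PE = (predicted − observed) / observed × 100%; the abstract does not state the exact formula, so this is an assumption, and the sample values below are invented.

```python
import statistics

def prediction_errors(observed, predicted):
    """Relative prediction errors in percent: (pred - obs) / obs * 100
    (assumed definition; the abstract does not give the formula)."""
    return [(p - o) / o * 100 for o, p in zip(observed, predicted)]

def mpe(observed, predicted):
    """Median relative prediction error: accuracy, with sign showing bias."""
    return statistics.median(prediction_errors(observed, predicted))

def mape(observed, predicted):
    """Median absolute relative prediction error: precision."""
    return statistics.median(abs(e) for e in prediction_errors(observed, predicted))

# Invented concentrations (mg/L) for three TDM samples
obs, pred = [50.0, 80.0, 120.0], [40.0, 90.0, 110.0]
print(f"MPE {mpe(obs, pred):.1f}%  MAPE {mape(obs, pred):.1f}%")  # → MPE -8.3%  MAPE 12.5%
```

A negative MPE indicates systematic underprediction, which is why the sign matters when comparing the A, B1, and B2 scenarios.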
The decision rule approach to optimization under uncertainty: methodology and applications
Dynamic decision-making under uncertainty has a long and distinguished history in operations research. Due to the curse of dimensionality, solution schemes that naïvely partition or discretize the support of the random problem parameters are limited to small and medium-sized problems, or they require restrictive modeling assumptions (e.g., absence of recourse actions). In the last few decades, several solution techniques have been proposed that aim to alleviate the curse of dimensionality. Amongst these is the decision rule approach, which faithfully models the random process and instead approximates the feasible region of the decision problem. In this paper, we survey the major theoretical findings relating to this approach, and we investigate its potential in two application areas.
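The best-known instance of this approach is the linear (affine) decision rule, sketched here in generic two-stage notation; the symbols are ours, not the survey's. The recourse decision $y(\cdot)$, in principle an arbitrary function of the uncertainty $\xi$, is restricted to affine functions $y(\xi) = y_0 + Y\xi$:

```latex
\begin{aligned}
\min_{x,\; y_0,\; Y} \quad & c^\top x \;+\; \max_{\xi \in \Xi}\; q^\top \bigl( y_0 + Y\xi \bigr) \\
\text{s.t.} \quad & A\,x \;+\; B\,\bigl( y_0 + Y\xi \bigr) \;\ge\; h(\xi) \qquad \forall\, \xi \in \Xi .
\end{aligned}
```

The infinite-dimensional search over functions $y(\cdot)$ collapses to a search over the finitely many coefficients $(y_0, Y)$, and for polyhedral or ellipsoidal $\Xi$ the semi-infinite constraint admits a tractable robust counterpart; the price paid is that the affine restriction is generally suboptimal.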
Excitation dynamics of interacting Rydberg atoms in small lattices
We study the Rydberg excitation dynamics of laser-driven atoms confined in a one-dimensional three-site lattice with open boundary conditions. Different regular excitation patterns are obtained within various parameter regimes. In the case of a weak Rydberg-Rydberg interaction, the excitation probability possesses a nodal structure which is characterized by an envelope with a period inversely proportional to the interaction. For strong Rydberg interaction we observe dipole blockade and antiblockade effects, and an appropriate detuning leads to an overall oscillatory behavior of the Rydberg probability density which is modulated only by small oscillations. Besides an exact diagonalization procedure, we study the system by performing first- and second-order perturbation theory as well as a spectral analysis.
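For a three-site system the exact diagonalization step is small enough to sketch directly. We assume a standard driven Rydberg-lattice Hamiltonian, H = Σᵢ (Ω/2)σₓⁱ − Δ nᵢ + V Σᵢ nᵢnᵢ₊₁ with open boundaries; the parameter values and the precise form of the Hamiltonian are our illustrative assumptions, not necessarily those of the paper.

```python
import numpy as np

# Illustrative parameters: Rabi frequency, detuning, nearest-neighbour interaction
OMEGA, DELTA, V = 1.0, 0.0, 5.0
N = 3  # three sites, open boundary conditions

sx = np.array([[0, 1], [1, 0]], dtype=float)     # drive between |g> and |r>
n_op = np.array([[0, 0], [0, 1]], dtype=float)   # Rydberg projector |r><r|
I2 = np.eye(2)

def site_op(op, i):
    """Embed a single-site operator at site i in the 2^N-dimensional space."""
    out = np.array([[1.0]])
    for j in range(N):
        out = np.kron(out, op if j == i else I2)
    return out

H = sum(0.5 * OMEGA * site_op(sx, i) - DELTA * site_op(n_op, i) for i in range(N))
H += V * sum(site_op(n_op, i) @ site_op(n_op, i + 1) for i in range(N - 1))

# Exact diagonalization, then time evolution from the all-ground state |ggg>
evals, evecs = np.linalg.eigh(H)
psi0 = np.zeros(2 ** N)
psi0[0] = 1.0
t = 2.0
psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ psi0))

n_total = sum(site_op(n_op, i) for i in range(N))
print("mean Rydberg number at t:", np.real(psi_t.conj() @ n_total @ psi_t))
```

Sweeping V from weak to strong in this sketch reproduces the qualitative crossover the abstract describes: nearly independent Rabi oscillations at small V versus blockade-suppressed double excitations at large V.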